
    Functional Testing of Feature Model Analysis Tools: a Test Suite

    A Feature Model (FM) is a compact representation of all the products of a software product line. Automated analysis of FMs is rapidly gaining importance: new analysis operations have been proposed, new tools have been developed to support those operations, and different logical paradigms and algorithms have been proposed to perform them. Implementing these operations is a complex task that easily leads to errors in analysis solutions. In this context, the lack of specific testing mechanisms is becoming a major obstacle hindering the development of tools and affecting their quality and reliability. In this article, we present the FaMa Test Suite, a set of implementation-independent test cases to validate the functionality of FM analysis tools. This is an efficient and handy mechanism to assist in the development of tools, detecting faults and improving their quality. In order to show the effectiveness of our proposal, we evaluated the suite using mutation testing as well as real faults and tools. Our results are promising and directly applicable in the testing of analysis solutions. We intend this work to be a first step toward the development of a widely accepted test suite to support functional testing in the community of automated analysis of feature models.
    Funding: CICYT TIN2009-07366, CICYT TIN2006-00472, Junta de Andalucía TIC-253
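
    A minimal sketch of what an implementation-independent test case of this kind can look like, assuming a toy feature model and a hypothetical count_products entry point (not the actual FaMa API): the expected output of the "number of products" operation is fixed by hand, and any analyzer exposing the same operation could be plugged in place of the brute-force reference below.

        # Toy feature model: root 'A' is mandatory, 'B' is optional, 'C' requires 'B'.
        from itertools import product

        FEATURES = ["A", "B", "C"]

        def is_valid_product(selection):
            a, b, c = (f in selection for f in FEATURES)
            return a and (b or not c)  # root mandatory; C requires B

        def count_products(fm_predicate):
            """Brute-force reference analyzer, standing in for the tool under test."""
            return sum(
                1
                for bits in product([False, True], repeat=len(FEATURES))
                if fm_predicate({f for f, on in zip(FEATURES, bits) if on})
            )

        def test_number_of_products():
            # Expected value worked out by hand: {A}, {A,B}, {A,B,C} -> 3 products.
            assert count_products(is_valid_product) == 3

        test_number_of_products()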

    Benchmarking on the Automated Analyses of Feature Models: a Preliminary Roadmap

    The automated analysis of Feature Models (FMs) is becoming a well-established discipline. New analysis operations, tools and techniques are rapidly proliferating in this context. However, the lack of standard mechanisms to evaluate and compare the performance of different solutions is starting to hinder the progress of this community. To address this situation, we propose the creation of a benchmark for the automated analyses of FMs. This benchmark would enable the objective and repeatable comparison of tools and techniques, as well as promote collaboration among the members of the discipline. Creating a benchmark requires a community to share a common view of the problem faced and to come to an agreement about a number of issues related to the design, distribution and usage of the benchmark. In this paper, we take a first step in that direction. In particular, we first describe the main issues to be addressed for the successful development and maintenance of the benchmark. Then, we propose a preliminary research agenda setting milestones and clarifying the types of contributions expected from the community.

    Automated testing on the analysis of variability-intensive artifacts: An exploratory study with SAT Solvers

    The automated detection of faults in variability analysis tools is a challenging task, often infeasible due to the combinatorial complexity of the analyses. In previous works, we successfully automated the generation of test data for feature model analysis tools using metamorphic testing. The positive results obtained have encouraged us to explore the applicability of this technique for the efficient detection of faults in other variability-intensive domains. In this paper, we present an automated test data generator for SAT solvers that enables the generation of random propositional formulas (inputs) and their solutions (expected output). In order to show the feasibility of our approach, we introduced 100 artificial faults (i.e., mutants) in an open source SAT solver and compared the ability of our generator and three related benchmarks to detect them. Our results are promising and encourage us to generalize the technique, which could be potentially applicable to any tool dealing with variability, such as Eclipse repositories or Maven dependency analyzers.
    Funding: Comisión Interministerial de Ciencia y Tecnología (CICYT) SETI (TIN2009-07366), Junta de Andalucía P07-TIC-2533 (Isabel), Junta de Andalucía P10-TIC-5906 (THEOS)
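
    A minimal sketch of the planted-solution idea behind such a generator, under illustrative names and parameters: a random truth assignment is fixed first, and every generated clause is forced to contain at least one literal satisfied by it, so the formula is satisfiable by construction and the planted assignment is a known expected output. A tiny brute-force search stands in for the SAT solver under test.

        import random
        from itertools import product

        def generate_satisfiable_cnf(num_vars=5, num_clauses=12, clause_len=3, seed=0):
            rng = random.Random(seed)
            planted = {v: rng.choice([True, False]) for v in range(1, num_vars + 1)}
            cnf = []
            for _ in range(num_clauses):
                chosen = rng.sample(range(1, num_vars + 1), clause_len)
                clause = [v if rng.random() < 0.5 else -v for v in chosen]
                v = abs(clause[0])
                clause[0] = v if planted[v] else -v  # force one satisfied literal
                cnf.append(clause)
            return cnf, planted

        def brute_force_sat(cnf, num_vars):
            """Stand-in for the solver under test."""
            for bits in product([False, True], repeat=num_vars):
                assignment = dict(zip(range(1, num_vars + 1), bits))
                if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in cnf):
                    return assignment
            return None

        cnf, planted = generate_satisfiable_cnf()
        assert brute_force_sat(cnf, 5) is not None, "fault: UNSAT on a satisfiable formula"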

    SmarTest: A Test Case Prioritization Tool for Drupal

    Test case prioritization techniques aim to identify the optimal ordering of tests to accelerate the detection of faults. The importance of these techniques has been recognized in the context of Software Product Lines (SPLs), where the potentially huge number of products makes testing extremely challenging. We found that the open source Drupal framework shares most of the principles and challenges of SPL development and can be considered a real-world example of a family of products. In a previous work, we represented the Drupal configuration space as a feature model and collected extra-functional information about its features from open repositories. Part of this data proved to be a good indicator of fault propensity in Drupal features, making it a valuable asset for prioritizing tests in individual Drupal products. In this paper, we present SmarTest, a test prioritization tool for accelerating the detection of faults in Drupal. SmarTest has been developed as an extension of the Drupal core testing system. It supports the prioritization of tests, providing faster feedback and letting testers begin correcting critical faults earlier. Different prioritization criteria can be selected in SmarTest, such as prioritization based on the number of commits made in the code or on the tests that failed in recent executions. SmarTest also provides, at run time, a customizable dashboard with significant system information to guide the testing. This work represents an interesting application of SPL-inspired testing techniques to real-world software systems, which could be applicable to other open-source SPLs.
    Funding: Ministerio de Economía y Competitividad BELI (TIN2015-70560-R), Junta de Andalucía P12-TIC-186
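
    A minimal sketch of this kind of prioritization, with invented weights and data (not SmarTest's actual scoring): tests covering frequently-changed code and tests that failed in the previous run are scheduled first.

        from dataclasses import dataclass

        @dataclass
        class TestCase:
            name: str
            commits_on_covered_code: int  # churn of the code the test exercises
            failed_last_run: bool

        def priority(t, churn_weight=1.0, failure_weight=10.0):
            # Higher score = run earlier; a recent failure dominates churn.
            return churn_weight * t.commits_on_covered_code + failure_weight * t.failed_last_run

        suite = [
            TestCase("test_comment_form", commits_on_covered_code=3, failed_last_run=False),
            TestCase("test_node_access", commits_on_covered_code=42, failed_last_run=False),
            TestCase("test_user_login", commits_on_covered_code=7, failed_last_run=True),
        ]
        for t in sorted(suite, key=priority, reverse=True):
            print(f"{priority(t):6.1f}  {t.name}")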

    Spectrum-Based Fault Localization in Model Transformations

    Model transformations play a cornerstone role in Model-Driven Engineering (MDE), as they provide the essential mechanisms for manipulating and transforming models. The correctness of software built using MDE techniques greatly relies on the correctness of model transformations. However, debugging them is challenging and error prone, and the situation gets more critical as the size and complexity of model transformations grow, to the point where manual debugging is no longer possible. Spectrum-Based Fault Localization (SBFL) uses the results of test cases and their corresponding code coverage information to estimate the likelihood of each program component (e.g., statements) being faulty. In this article we present an approach that applies SBFL to locate the faulty rules in model transformations. We evaluate the feasibility and accuracy of the approach by comparing the effectiveness of 18 different state-of-the-art SBFL techniques at locating faults in model transformations. Evaluation results revealed that the best techniques, namely Kulczynski2, Mountford, Ochiai, and Zoltar, lead the debugger to inspect at most three rules to locate the bug in around 74% of the cases. Furthermore, we compare our approach with a static approach for fault localization in model transformations, observing a clear superiority of the proposed SBFL-based method.
    Funding: Comisión Interministerial de Ciencia y Tecnología TIN2015-70560-R, Junta de Andalucía P12-TIC-186
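
    A minimal sketch of the SBFL computation on transformation rules, using one of the techniques named above (Ochiai) and invented coverage data: for each rule r, with e_f and e_p the numbers of failed and passed tests that executed r, suspiciousness is e_f / sqrt(total_failed * (e_f + e_p)), and rules are inspected in decreasing order of that score.

        from math import sqrt

        coverage = {  # rule -> tests that executed it (invented data)
            "Class2Table": {"t1", "t2", "t3"},
            "Attr2Column": {"t2", "t3"},
            "Ref2FKey":    {"t3", "t4"},
        }
        failed = {"t3"}
        passed = {"t1", "t2", "t4"}

        def ochiai(rule):
            e_f = len(coverage[rule] & failed)
            e_p = len(coverage[rule] & passed)
            denom = sqrt(len(failed) * (e_f + e_p))
            return e_f / denom if denom else 0.0

        # Rank rules from most to least suspicious.
        for rule in sorted(coverage, key=ochiai, reverse=True):
            print(f"{ochiai(rule):.3f}  {rule}")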

    Towards the Automation of Metamorphic Testing in Model Transformations

    Model transformations are the cornerstone of Model-Driven Engineering, providing the essential mechanisms for manipulating and transforming models. Checking whether the output of a model transformation is correct is a manual and error-prone task, referred to as the oracle problem in the software testing literature. The correctness of the model transformation program is crucial for the proper generation of its output, so it should be tested. Metamorphic testing is a testing technique that alleviates the oracle problem by exploiting the relations between different inputs and outputs of the program under test, so-called metamorphic relations. In this paper we give an insight into our approach to generically define metamorphic relations for model transformations, which can be automatically instantiated for any specific model transformation.
    Funding: Comisión Interministerial de Ciencia y Tecnología TIN2015-70560-R, Junta de Andalucía TIC-5906, Junta de Andalucía P12-TIC-186
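
    A minimal sketch of one such relation, checked on a toy class-to-table transformation (both the transformation and the relation are illustrative, not the paper's generic definitions): extending the input model with a new class must preserve every table produced for the original model, which can be checked without knowing the exact expected output.

        def class2table(model):
            """Toy transformation under test: each class name becomes a table name."""
            return {f"tbl_{cls}" for cls in model}

        def mr_extension_preserves_output(model, new_class):
            source_output = class2table(model)
            followup_output = class2table(model | {new_class})
            return source_output <= followup_output  # subset: nothing was lost

        assert mr_extension_preserves_output({"Person", "Order"}, "Invoice")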

    Domain-Specific Languages and Model Transformations for Software Product Lines

    This tutorial introduces and demonstrates the use of Model-Driven Engineering in Software Product Lines. In particular, it teaches participants about domain-specific languages, metamodeling and modeling, and where these techniques can be best used (and where not). Along with modeling, the tutorial teaches various model transformation approaches and how they can be effectively used to bring software product lines to a different domain and to optimize them. The use of models for handling product variation is demonstrated with real-life examples from various industries and product lines.
    Funding: Comisión Interministerial de Ciencia y Tecnología TIN2015-70560-R, Junta de Andalucía P12-TIC-186

    Towards the Definition of Test Coverage Criteria for RESTful Web APIs

    Web APIs following the REST architectural style (so-called RESTful Web APIs) have become the de facto standard for software integration. As RESTful APIs gain momentum, so does the testing of them. However, there is a lack of mechanisms to assess the adequacy of testing approaches in this context, which makes it difficult to measure and compare the effectiveness of different testing techniques. In this work-in-progress paper, we take a step towards a framework for the assessment and comparison of testing approaches for RESTful Web APIs. To that end, we propose a preliminary catalogue of test coverage criteria. These criteria measure the adequacy of test suites based on the degree to which they exercise the different input and output elements of RESTful Web services. To the best of our knowledge, this is the first attempt to measure the adequacy of testing approaches for RESTful Web APIs.
    Funding: Ministerio de Economía y Competitividad BELI (TIN2015-70560-R), Ministerio de Ciencia, Innovación y Universidades RTI2018-101204-B-C21 (HORATIO), Ministerio de Educación, Cultura y Deporte FPU17/0407
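
    A minimal sketch of one input coverage criterion of this kind, parameter coverage, on an invented specification (not a real OpenAPI parser): the fraction of parameters declared in the API specification that the test suite exercises at least once.

        spec_params = {  # operation -> parameters declared in the specification
            "GET /search": {"q", "limit", "offset"},
            "GET /users":  {"role", "active"},
        }
        suite_calls = [  # (operation, parameters actually used by each test)
            ("GET /search", {"q"}),
            ("GET /search", {"q", "limit"}),
            ("GET /users",  {"role"}),
        ]

        used = {}
        for op, params in suite_calls:
            used.setdefault(op, set()).update(params)

        declared = sum(len(p) for p in spec_params.values())
        covered = sum(len(spec_params[op] & used.get(op, set())) for op in spec_params)
        print(f"parameter coverage: {covered}/{declared} = {covered / declared:.0%}")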

    RESTest: automated black-box testing of RESTful web APIs

    Testing RESTful APIs thoroughly is critical due to their key role in software integration. Existing tools for the automated generation of test cases in this domain have shown great promise, but their applicability is limited as they mostly rely on random inputs, i.e., fuzzing. In this paper, we present RESTest, an open-source black-box testing framework for RESTful web APIs. Based on the API specification, RESTest supports the generation of test cases using different testing techniques, such as fuzzing and constraint-based testing, among others. RESTest is developed as a framework and can be easily extended with new test case generators and test writers for different programming languages. We evaluate the tool in two scenarios: offline and online testing. In the former, we show how RESTest can efficiently generate realistic test cases (test inputs and test oracles) that uncover bugs in real-world APIs. In the latter, we show RESTest's capabilities as a continuous testing and monitoring framework. Demo video: https://youtu.be/1f_tjdkaCKo
    Funding: Junta de Andalucía APOLO (US-1264651), Junta de Andalucía EKIPMENT-PLUS (P18-FR-2895), Ministerio de Ciencia, Innovación y Universidades RTI2018-101204-B-C21 (HORATIO), Ministerio de Educación, Cultura y Deporte FPU17/0407
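
    A minimal sketch of the black-box pattern such a tool automates, with an invented parameter specification and a stubbed server so the example runs offline (a real run would issue HTTP requests against the live API): random inputs, valid and invalid, are derived from the specification, and "the API must never answer 5xx" serves as a simple test oracle.

        import random

        param_spec = {  # illustrative stand-in for an OpenAPI parameter description
            "limit":  lambda rng: rng.randint(-10, 100),       # includes invalid values
            "format": lambda rng: rng.choice(["json", "xml", "???"]),
        }

        def call_api(params):
            """Stub server with a planted bug: it crashes (500) when limit == 0."""
            if params["limit"] == 0:
                return 500
            if params["limit"] < 0 or params["format"] == "???":
                return 400  # graceful rejection of invalid input is acceptable
            return 200

        rng = random.Random(42)
        failures = []
        for _ in range(200):
            params = {name: gen(rng) for name, gen in param_spec.items()}
            status = call_api(params)
            if status >= 500:
                failures.append((params, status))
        print(f"{len(failures)} oracle violation(s) found")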

    Automated inference of likely metamorphic relations for model transformations

    Model transformations play a cornerstone role in Model-Driven Engineering (MDE) as they provide the essential mechanisms for manipulating and transforming models. Checking whether the output of a model transformation is correct is a manual and error-prone task, referred to as the oracle problem. Metamorphic testing alleviates the oracle problem by exploiting the relations among different inputs and outputs of the program under test, so-called metamorphic relations (MRs). One of the main challenges in metamorphic testing is the automated inference of likely MRs. This paper proposes an approach to automatically infer likely MRs for ATL model transformations, where the tester does not need any knowledge of the transformation. The inferred MRs aim at detecting faults in model transformations in three application scenarios, namely regression testing, incremental transformations and migrations among transformation languages. In the experiments performed, the inferred likely MRs proved to be quite accurate, with a precision of 96.4% (4,101 true positives out of 4,254 MRs inferred). Furthermore, they have been useful for identifying mutants in regression testing scenarios, with a mutation score of 93.3%. Finally, our approach can be used in conjunction with current approaches for the automatic generation of test cases.
    Funding: Comisión Interministerial de Ciencia y Tecnología TIN2015-70560-R, Junta de Andalucía P12-TIC-186
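
    A minimal sketch of the inference idea on a toy transformation (the transformation, the execution data and the candidate pattern are all illustrative, not the paper's ATL analysis): run the transformation on pairs of source and follow-up models, check whether a candidate relation holds on every observed pair, and promote it to a likely MR if it does.

        def class2table(model):
            """Toy transformation: each class name becomes a table name."""
            return {f"tbl_{c}" for c in model}

        observed_pairs = [  # (source model, follow-up model extending it)
            ({"A"}, {"A", "B"}),
            ({"A", "B"}, {"A", "B", "C"}),
            ({"X"}, {"X", "Y"}),
        ]

        def candidate_mr(source, followup):
            # Candidate relation: adding n classes adds exactly n tables.
            return len(class2table(followup)) - len(class2table(source)) == len(followup - source)

        if all(candidate_mr(s, f) for s, f in observed_pairs):
            print("candidate relation held on all observed executions -> likely MR")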